Azure DevOps and Automation
David Watson, Cloud Solution Architect
Welcome everyone. Today we're doing a 2-hour deep dive into Azure DevOps YAML Pipelines.
We will focus exclusively on YAML pipelines — not the classic UI-based editor.
By the end you'll be able to author, structure, and manage production-grade CI/CD pipelines.
Agenda
Intro to Azure DevOps
What is Azure DevOps?
Core Services Overview
Repos, Boards & Pipelines
Pipeline Fundamentals
Azure Pipelines
Pipeline Structure
Stages, Jobs & Steps
Tasks & Scripts
Triggers
Variables & Expressions
Advanced Topics
Agents & Agent Pools
Service Connections & Artifacts
Environments & Approvals
Templates
Best Practices
Here's our roadmap for the session.
Part 1 introduces Azure DevOps as a platform — what it is and the core services it provides.
Part 2 covers the fundamental building blocks of YAML pipelines — structure, triggers, and variables.
Part 3 goes deeper into agents, approvals, templates, and best practices.
Introduction to Azure DevOps
Azure DevOps
Azure Boards
Agile tools to plan, track, and discuss work across your teams
Azure Pipelines
Build, test, and deploy with CI/CD for any language, platform, and cloud
Azure Repos
Unlimited cloud-hosted Git repos with pull requests and advanced file management
Azure Test Plans
Manual and exploratory testing tools to ship with confidence
Azure Artifacts
Create, host, and share packages with your team and CI/CD pipelines
Azure DevOps is a suite of development services covering the full DevOps lifecycle.
Azure Boards provides agile planning with Kanban boards, backlogs, and sprints.
Azure Pipelines is the CI/CD engine — this is what we'll deep dive into today.
Azure Repos provides Git-based source control with pull requests.
Azure Test Plans provides manual and exploratory testing tools.
Azure Artifacts hosts packages like NuGet, npm, Maven, and Python.
All five services integrate seamlessly with each other.
Azure Boards
Track work with Kanban boards, backlogs, team dashboards, and custom reporting
Connected from idea to release
Track ideas at every stage with code changes linked to work items
Scrum ready
Built-in scrum boards and planning tools for sprints, stand-ups, and planning
Project insights
Analytics tools and dashboard widgets for project health and status
Azure Boards provides a rich set of agile tools. Kanban boards give you a visual workflow,
backlogs help you plan sprints, and work items link directly to code changes in repos and
builds in pipelines. The analytics and dashboard widgets give leadership visibility into
project progress without needing to dig into the details.
Azure Repos
Unlimited private Git repo hosting and TFVC support from hobby projects to the world's largest repositories
Works with your Git client
Securely connect and push code from any IDE, editor, or Git client
Web hooks and API integration
Add validations and extensions from the marketplace or build your own using web hooks and REST APIs
Semantic code search
Quickly find what you're looking for with code-aware search that understands classes and variables
Azure Repos provides unlimited private Git repositories. It works with any Git client
and supports branch policies, pull request reviews, and code search. TFVC is also
available for teams that need centralized version control. Web hooks and REST APIs
enable deep integration with your existing tools and workflows.
Azure Pipelines
Cloud-hosted pipelines for Linux, Windows and macOS, with unlimited minutes for open source
Any language, any platform, any cloud
Build, test, and deploy Node.js, Python, Java, .NET, and more across Linux, macOS, and Windows
Extensible
Community-built build, test, and deployment tasks plus hundreds of extensions from Slack to SonarCloud
Containers and Kubernetes
Build and push images to container registries, deploy to individual hosts or Kubernetes
Azure Pipelines is the CI/CD engine of Azure DevOps. It supports any language and platform,
runs jobs in parallel on Linux, macOS, and Windows, and deploys to Azure, AWS, GCP, or
on-premises. It's best-in-class for open source with unlimited build minutes and up to
10 free parallel jobs. The extensibility model lets you add community tasks or write your own.
Azure Test Plans
End-to-end traceability. Run tests, log defects, and track quality throughout your testing lifecycle
Capture rich data
Capture scenario data as you test to make defects actionable. Create test cases from exploratory sessions
Test across web and desktop
Complete scripted tests across desktop or web scenarios, on-premises or in the cloud
End-to-end traceability
Leverage the same test tools across engineers and user acceptance testing stakeholders
Azure Test Plans provides manual and exploratory testing tools with full traceability
back to requirements and builds. You can capture rich data during test sessions,
run scripted tests across web and desktop, and get visibility into test coverage
and quality metrics through charts and dashboards.
Azure Artifacts
Create and share Maven, npm, and NuGet package feeds — fully integrated into CI/CD pipelines
Manage all package types
Universal artifact management for Maven, npm, and NuGet from public and private sources
Add packages to any pipeline
Share packages with built-in CI/CD, versioning, and testing
Share code efficiently
Easily share code across small teams and large enterprises
Azure Artifacts provides package management for Maven, npm, NuGet, and Python packages.
You can create feeds from public and private sources, integrate them directly into your
CI/CD pipelines, and share packages across teams. It supports versioning, upstream sources,
and feed-level permissions for secure package distribution.
What are Azure Pipelines?
Cloud-hosted CI/CD service in Azure DevOps
Automates build, test, and deployment
Works with any language, platform, or cloud
Pipelines defined as code in YAML
Version-controlled alongside your source
Azure Pipelines is a fully managed CI/CD service. Unlike classic pipelines that use a GUI editor,
YAML pipelines are defined as code — meaning they live in your repo, go through code review,
and have full version history. This is the modern, recommended approach.
CI vs CD Pipelines
Continuous Integration (CI)
Triggered on every code change
Compiles & builds the application
Runs automated tests
Produces a versioned artifact
For IaC projects: validates & lints templates even without a traditional build
Continuous Delivery (CD)
Triggered after a successful CI build
Deploys artifacts to target environments
Promotes through stages (e.g. Dev → QA → Prod)
May include approval gates
Before we dive into Azure-specific features, let's clarify the two core pipeline concepts.
CI pipelines focus on integration — every time code is pushed, the pipeline builds the code,
runs tests, and produces a deployable artifact. The goal is fast feedback: catch bugs early.
CD pipelines take a successfully built artifact and deploy it across environments.
They typically promote through Dev, QA/Staging, and Production with approval gates in between.
Together, CI/CD forms an automated path from code commit to production deployment.
In Azure DevOps, a single YAML pipeline can handle both CI and CD as separate stages.
Why YAML over Classic?
YAML Pipelines
Version-controlled
Code review via PRs
Branching & merging
Templates & reuse
Classic Pipelines
GUI-based editor
No version control
Limited reusability
Being deprecated
Microsoft is moving away from classic pipelines. YAML is the future — it gives you
pipeline-as-code with all the benefits of source control. Today we focus exclusively on YAML.
Pipeline Structure
This diagram from Microsoft shows the hierarchy: a Pipeline contains Stages, Stages contain Jobs,
and Jobs contain Steps. A trigger tells the pipeline when to run. An agent executes each job.
We'll break down each of these concepts throughout the training.
Pipeline Structure
The simplest possible pipeline
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
This is the minimum viable pipeline. It triggers on the main branch, runs on
a Microsoft-hosted Ubuntu agent, and executes a single script step.
The file is typically named azure-pipelines.yml and lives in the repo root.
Full Pipeline Hierarchy
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildApp
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: 'build'
- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployApp
    steps:
    - script: echo Deploying...
Here's the full hierarchy: trigger → stages → jobs → steps.
When you have multiple stages, you create a multi-stage pipeline for CI/CD.
If you only need a single stage and single job, you can omit those keywords and
just write steps directly — Azure DevOps infers the structure.
Stages
Logical division of a pipeline (Build, Test, Deploy)
Run sequentially by default
Can define dependencies with dependsOn
Each stage can target different environments
stages:
- stage: Build
  displayName: 'Build Stage'
- stage: Test
  dependsOn: Build
- stage: Deploy
  dependsOn: Test
Stages are the top-level organizational unit. A typical pattern is Build → Test → Deploy.
You can run stages in parallel by adjusting dependsOn or setting dependsOn to an empty array.
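A sketch of the parallel pattern from the notes (stage names are illustrative): an empty dependsOn detaches a stage from the default sequential chain, and listing multiple dependencies fans back in.

```yaml
stages:
# No dependencies — both stages start immediately, in parallel
- stage: BuildLinux
  dependsOn: []
- stage: BuildWindows
  dependsOn: []
# Fan-in: waits for both parallel stages to finish
- stage: Package
  dependsOn:
  - BuildLinux
  - BuildWindows
```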
Jobs
A unit of work that runs on one agent
Jobs within a stage run in parallel by default
Use dependsOn to control ordering
Each job gets a fresh agent workspace
jobs:
- job: UnitTests
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: dotnet test
- job: IntegrationTests
  dependsOn: UnitTests
  steps:
  - script: dotnet test --filter Category=Integration
Jobs run on agents. By default, jobs in the same stage run in parallel, which speeds up
your pipeline. Each job starts with a clean workspace: files don't carry over
between jobs unless you use artifacts.
Steps
The smallest unit of execution
Run sequentially within a job
Three types: script, task, checkout
Each step runs in its own process on the agent
steps:
- checkout: self          # Check out source code
- script: npm install     # Run a script
- task: PublishBuildArtifacts@1   # Run a task
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
Steps are the actual work items. The checkout step gets your source code —
this happens automatically by default. Scripts let you run command-line commands.
Tasks are pre-built, reusable building blocks from the marketplace or built-in.
Tasks
Pre-packaged, versioned automation units
Hundreds available in the Marketplace
Referenced as TaskName@Version
Inputs configured via the inputs property
steps:
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '8.x'
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: '**/*.csproj'
Tasks are the recommended way to perform common operations. They're versioned using the @ syntax
so you can pin to a major version. Microsoft provides built-in tasks for .NET, Node, Docker,
Azure deployments, and more. You can also install tasks from the Visual Studio Marketplace.
Script Steps
Three ways to run inline scripts
steps:
# Cross-platform (runs cmd on Windows, bash on Linux/Mac)
- script: echo Hello from script
  displayName: 'Cross-platform script'

# Bash specifically
- bash: |
    echo "Running on $(Agent.OS)"
    ./run-tests.sh
  displayName: 'Bash script'

# PowerShell specifically
- powershell: |
    Write-Host "Building project..."
    dotnet build -c Release
  displayName: 'PowerShell script'
You have three script keywords: script is cross-platform, bash forces bash,
and powershell runs PowerShell. Use the pipe character for multi-line scripts.
There's also pwsh for PowerShell Core which works cross-platform.
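A minimal sketch of the pwsh keyword mentioned in the notes; unlike powershell, it runs PowerShell Core and therefore works on any agent OS:

```yaml
steps:
# pwsh = PowerShell Core, available on Linux, macOS, and Windows agents
- pwsh: |
    Write-Host "PowerShell Core on $(Agent.OS)"
  displayName: 'Cross-platform PowerShell Core'
```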
CI Triggers
Automatically run on code changes
# Simple — trigger on specific branches
trigger:
- main
- release/*

# Detailed — include/exclude branches and paths
trigger:
  branches:
    include:
    - main
    - feature/*
    exclude:
    - feature/experimental
  paths:
    include:
    - src/**
    exclude:
    - docs/**
CI triggers fire when code is pushed. You can filter by branch and path.
Path filters are powerful — you can skip pipeline runs when only docs change.
Use 'trigger: none' to disable CI triggers entirely.
PR Triggers
Run validation on pull requests
pr:
  branches:
    include:
    - main
    - release/*
  paths:
    include:
    - src/**
    exclude:
    - '*.md'
PR triggers validate pull requests before merging. Note that PR triggers are
configured in YAML for GitHub repos but are set via branch policies for
Azure Repos — this is a common gotcha. You can combine both CI and PR triggers
in the same YAML file.
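Combining both trigger types in one file might look like this sketch (branch names are illustrative):

```yaml
# CI trigger — fires on pushes to main
trigger:
- main

# PR trigger — validates pull requests targeting main (honored for GitHub repos;
# for Azure Repos, configure build validation in branch policies instead)
pr:
- main
```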
Scheduled Triggers
Run pipelines on a schedule using cron syntax
schedules:
- cron: '0 2 * * Mon-Fri'
  displayName: 'Weekday 2am build'
  branches:
    include:
    - main
  always: true    # Run even if no changes
- cron: '0 8 * * Sun'
  displayName: 'Weekly full test'
  branches:
    include:
    - main
  always: false   # Only if changes exist
Scheduled triggers use standard cron syntax. The 'always' flag controls whether
the pipeline runs even when there are no code changes. Times are in UTC.
You can have multiple schedules targeting different branches.
Variables
Storing and using values in your pipeline
# Inline variables
variables:
  buildConfiguration: 'Release'
  dotnetVersion: '8.x'

steps:
- script: dotnet build -c $(buildConfiguration)
- task: UseDotNet@2
  inputs:
    version: $(dotnetVersion)
Variables let you define values once and reference them throughout the pipeline
using the dollar-parentheses syntax. This avoids repetition and makes
your pipelines easier to maintain.
Variable Scoping
Variables can be scoped at multiple levels
variables:            # Pipeline-level
  globalVar: 'shared'

stages:
- stage: Build
  variables:          # Stage-level
    stageVar: 'build-only'
  jobs:
  - job: Compile
    variables:        # Job-level
      jobVar: 'compile-only'
    steps:
    - script: |
        echo $(globalVar)   # Works
        echo $(stageVar)    # Works
        echo $(jobVar)      # Works
Variables can be set at pipeline, stage, or job level. A variable set at a higher
level is available to all lower levels. Variables set at the job level are only
available within that job. This scoping helps you organize configuration values.
Variable Groups & Secrets
Variable groups store values in the Azure DevOps UI
Link to Azure Key Vault for secrets
Secrets are masked in logs automatically
Referenced with the group keyword
variables:
- group: home-office-config   # name of a variable group defined in Azure DevOps

steps:
- script: echo $(app-name)    # references variable 'app-name' from the variable group
Variable groups are managed in the Library section of Azure DevOps.
They're great for environment-specific config and secrets. You can link
a variable group to Azure Key Vault to pull secrets automatically.
Secret variables are never echoed to logs.
Predefined Variables
Azure DevOps automatically sets dozens of variables on every run
Provide context about the build, repo, agent, and environment
Referenced with $(Variable.Name) syntax
Available in YAML values, scripts, and task inputs
Grouped by prefix: Build.* , System.* , Agent.* , Pipeline.*
Azure DevOps automatically populates many variables before your pipeline even starts.
These give you metadata about the current run — which branch triggered it, the build number,
the agent OS, file paths, and more. You don't need to define them — they're just available.
They're grouped by prefix: Build for build info, System for system-level info, Agent for agent details,
and Pipeline for pipeline metadata. Let's look at the most useful ones.
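For instance, a single step can print several of these without defining anything beforehand:

```yaml
steps:
- script: |
    echo "Branch:   $(Build.SourceBranchName)"
    echo "Build ID: $(Build.BuildId)"
    echo "Reason:   $(Build.Reason)"
    echo "Agent OS: $(Agent.OS)"
  displayName: 'Show predefined variables'
```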
Predefined Variables — Build & System
Variable | Description | Example Value
Build.SourceBranch | Full ref of the triggering branch | refs/heads/main
Build.SourceBranchName | Short branch name | main
Build.BuildId | Unique numeric ID for this run | 4827
Build.BuildNumber | Formatted run name | 20260217.3
Build.Repository.Name | Name of the source repository | my-app
Build.SourceVersion | Commit SHA that triggered the run | a1b2c3d...
Build.Reason | Why the build ran | IndividualCI
System.TeamProject | Name of the Azure DevOps project | MyProject
Build.SourceBranch gives the full ref path while SourceBranchName gives just the last segment.
Build.BuildId is a unique number that always increments, great for tagging artifacts.
Build.Reason tells you what triggered the run — IndividualCI for a push, PullRequest for a PR,
Schedule for a scheduled run, Manual for someone clicking Run. System.TeamProject is useful
when templates are shared across projects.
Predefined Variables — Paths & Agent
Variable | Description
Build.ArtifactStagingDirectory | Local path for staging artifacts before publishing
Build.SourcesDirectory | Local path where source code is checked out
System.DefaultWorkingDirectory | Default working directory (same as sources dir)
Pipeline.Workspace | Root workspace directory for the pipeline
Agent.OS | Operating system of the agent
Agent.MachineName | Name of the agent machine
Agent.TempDirectory | Temp folder, cleaned after each job
Agent.ToolsDirectory | Cached tools directory (e.g., .NET, Node)
Path variables are critical for scripts. Build.ArtifactStagingDirectory is where you copy files
before publishing as artifacts. Build.SourcesDirectory is where your repo is cloned.
Pipeline.Workspace is the root that contains sources, artifacts, and test results.
Agent.OS is useful for conditional steps — you can check if you're on Windows or Linux.
Agent.TempDirectory is cleaned after each job so it's safe for temporary files.
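As a sketch, a pair of steps gated on Agent.OS (Azure DevOps reports 'Windows_NT' on Windows agents and 'Linux' on Linux agents):

```yaml
steps:
# Runs only when the job landed on a Windows agent
- powershell: Write-Host "Windows-only setup"
  condition: eq(variables['Agent.OS'], 'Windows_NT')
# Runs only on Linux agents
- bash: echo "Linux-only setup"
  condition: eq(variables['Agent.OS'], 'Linux')
```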
Conditions & Expressions
Control when stages, jobs, or steps run
steps:
- script: echo 'Running on main branch'
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
- script: echo 'Previous step failed'
  condition: failed()

stages:
- stage: DeployProd
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
Conditions let you control execution flow. Common functions are succeeded(), failed(),
always(), eq(), ne(), and(), or(), and contains(). You can apply conditions at the
step, job, or stage level. By default everything has condition: succeeded().
What is an Agent?
A compute environment that executes your pipeline jobs
Pipelines define what to do — agents do the actual work
Each job runs on exactly one agent
Without an agent, your pipeline has no machine to run on
Agents are organised into agent pools for management
An agent is simply a piece of software installed on a machine that listens for jobs
from Azure DevOps. When a pipeline runs, Azure DevOps assigns each job to an available
agent from the specified pool. The agent checks out your code, runs the steps, and
reports results back. Think of it as the worker that actually does the building,
testing, and deploying. Agents are grouped into pools so you can manage capacity,
permissions, and capabilities centrally.
Agent Pools
A logical grouping of agents with shared permissions and capabilities
Pipelines specify a pool , not an individual agent
Azure DevOps assigns the next available agent from the pool
Manage capacity by adding or removing agents from a pool
Set permissions and pipeline access at the pool level
Agent pools are the unit of management — you never pick a specific agent in your YAML.
Instead you specify which pool to use and Azure DevOps handles the scheduling. This means
your pipeline doesn't depend on any single machine. If one agent is busy or offline,
another agent in the same pool picks up the work. Pools also let you control security —
you can restrict which pipelines are allowed to use a given pool.
Microsoft-hosted vs Self-hosted Agents
Microsoft-hosted
Fresh VM per job
Maintenance-free
Pre-installed tools
Limited customization
1 free parallel job (1,800 min/month)
Purchase additional parallel jobs
Self-hosted
You manage the machine
Persistent between runs
Full control over tools
Can access private networks
1 free parallel job (unlimited minutes)
You provide the infrastructure
Microsoft-hosted agents give you a clean VM every time — great for most scenarios.
Self-hosted agents are machines you install and maintain yourself. Use self-hosted
when you need special software, hardware, or network access that Microsoft-hosted
agents don't provide. Self-hosted agents can be VMs, physical machines, or containers.
On licensing: every Azure DevOps organization gets one free Microsoft-hosted parallel job
with 1800 minutes per month, and one free self-hosted parallel job with unlimited minutes.
You can purchase additional parallel jobs if you need more concurrency. Self-hosted agents
are free to run — you just pay for the infrastructure you provide (VMs, hardware, etc.).
Agents & Agent Pools
Azure DevOps
Microsoft-hosted Pool
Agent 1
Agent 2
Agent 3
Self-hosted Pool
Agent 1
Agent 2
Jobs are assigned to available agents from the specified pool
An agent is a computing environment that runs your pipeline jobs. Agents are organized into
agent pools. When a job runs, Azure DevOps assigns it to an available agent from the specified pool.
There are two types: Microsoft-hosted and self-hosted.
Specifying Agent Pools
# Microsoft-hosted — by VM image name
pool:
  vmImage: 'ubuntu-latest'   # or windows-latest, macos-latest

# Self-hosted — by pool name
pool:
  name: 'MySelfHostedPool'
  demands:
  - npm
  - Agent.OS -equals Linux

# Per-job pool selection
jobs:
- job: LinuxBuild
  pool:
    vmImage: 'ubuntu-latest'
- job: WindowsBuild
  pool:
    vmImage: 'windows-latest'
For Microsoft-hosted, you specify vmImage. Common choices are ubuntu-latest,
windows-latest, and macos-latest. For self-hosted pools, use the pool name
and optionally add demands to filter agents by capability. You can set the pool
at pipeline, stage, or job level — job-level overrides higher levels.
Environments
Named targets for deployment (Dev, Staging, Prod)
Track deployment history and status, with links to work items
Attach approvals and checks
Used with deployment jobs
Not generally tied to actual infrastructure — a logical concept only
jobs:
- deployment: DeployWeb
  environment: 'Production'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying to production
Environments represent your deployment targets. They're created in the Azure DevOps
portal and referenced in your YAML. The deployment job type lets you use deployment
strategies like runOnce, rolling, or canary. Environments provide traceability —
you can see which commits are deployed where.
Important: an environment in Azure DevOps is a purely logical construct — it is NOT
directly linked to any actual Azure infrastructure (resource groups, subscriptions, etc.).
It's just a name with approvals and deployment history attached. The actual connection
to real infrastructure happens through service connections and the deployment tasks
in your pipeline steps. You could have an environment called "Production" that deploys
to any cloud, on-premises server, or even multiple targets — the environment itself
doesn't know or care where the deployment goes. Think of it as a governance and
traceability layer, not an infrastructure mapping.
Approvals & Checks
Pipeline runs
→
Stage starts
→
Checks evaluate
→
Approved → Deploy
Approvals and checks are defined on environments, not directly in YAML pipelines. Stages in the pipeline are then associated with environments.
Manual approvals, business hours, branch control
Pipeline pauses until all checks pass
Approvals and checks are set on environments through the Azure DevOps UI, not in the YAML file itself.
This is a security feature — pipeline authors can't bypass approvals by editing the YAML.
Types include manual approval, business hours gate, branch control, required template,
and invoke Azure Function or REST API checks.
Approvals & Checks — Full List
Checks run in category order — static first, then approvals, then dynamic
Category | Check | Purpose
Static | Branch Control | Restrict which branches can deploy
Static | Required Template | Enforce use of approved YAML templates
Static | Evaluate Artifact | Validate container images against policies
Dynamic | Manual Approval | Designated approvers must sign off
Dynamic | Business Hours | Only deploy within a time window
Dynamic | Invoke Azure Function | Serverless custom validation logic
Dynamic | Invoke REST API | Call any external service for validation
Dynamic | Query Azure Monitor | Verify no alerts after deployment
Lock | Exclusive Lock | Only one pipeline stage at a time
Extension | ServiceNow Change Mgmt | Integrate ServiceNow change requests
Azure DevOps has five categories of checks that run in a specific order:
1. Static checks run first: Branch Control validates allowed branches, Required Template
enforces that pipelines extend from approved templates, and Evaluate Artifact checks
container images against custom policies (OPA-style).
2. Pre-check approvals run next.
3. Dynamic checks: Manual Approval requires designated users to sign off (with optional
deferred approval for scheduling). Business Hours gates deployments to a time window.
Invoke Azure Function and Invoke REST API let you call external services for custom
validation — these can be synchronous or asynchronous. Query Azure Monitor Alerts
checks that no alerts fired after a canary deployment.
4. Post-check approvals.
5. Exclusive Lock ensures only one run uses a resource at a time — supports runLatest
and sequential lock behaviours.
ServiceNow Change Management is a marketplace extension that creates and tracks
ServiceNow change requests as a gate.
All checks can be configured on environments, service connections, repos, variable groups,
secure files, and agent pools. Checks can also be temporarily disabled or bypassed by admins.
Templates — Reusing Pipelines
DRY principle for pipelines
Define reusable steps, jobs, or stages
Accept parameters for customization
Share across repos via resources
Enforce governance with extends
Templates are the primary mechanism for reuse in Azure Pipelines.
Instead of copying and pasting YAML across pipelines, you extract common
logic into template files and reference them. There are four template types:
step, job, stage, and variable templates. Plus the extends keyword for governance.
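Variable templates, the one type without a dedicated slide, might look like this sketch (file and variable names are illustrative):

```yaml
# --- templates/common-vars.yml (the template) ---
variables:
  buildConfiguration: 'Release'
  dotnetVersion: '8.x'

# --- azure-pipelines.yml (the consumer) ---
# Note: 'variables' switches to list form when templates are included
variables:
- template: templates/common-vars.yml
- name: localVar
  value: 'extra'
steps:
- script: dotnet build -c $(buildConfiguration)
```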
Step Template
Template File
# templates/build-steps.yml
parameters:
- name: configuration
  default: 'Release'

steps:
- task: UseDotNet@2
  inputs:
    version: '8.x'
- script: |
    dotnet build -c ${{ parameters.configuration }}
Main Pipeline
# azure-pipelines.yml
steps:
- template: templates/build-steps.yml
  parameters:
    configuration: 'Debug'
Step templates let you define a reusable sequence of steps. They accept parameters
so the calling pipeline can customize behavior. The double-curly-brace syntax
is a compile-time expression — it's resolved before the pipeline runs. This is
the most granular level of template reuse.
Job & Stage Templates
# templates/deploy-stage.yml
parameters:
- name: environment
  type: string
- name: vmImage
  default: 'ubuntu-latest'

stages:
- stage: Deploy_${{ parameters.environment }}
  jobs:
  - deployment: Deploy
    environment: ${{ parameters.environment }}
    pool:
      vmImage: ${{ parameters.vmImage }}
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying to ${{ parameters.environment }}
Job and stage templates work the same way — you define a whole job or stage in a
template file and reference it. This is powerful for creating standardized
deployment stages that are reused across many pipelines. Note the use of
parameter expressions in the stage name to create unique identifiers.
Using Stage Templates
# azure-pipelines.yml
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildApp
    steps:
    - script: dotnet build
- template: templates/deploy-stage.yml
  parameters:
    environment: 'Staging'
- template: templates/deploy-stage.yml
  parameters:
    environment: 'Production'
Here's how you call a stage template. Notice how the same deploy template is reused
for both Staging and Production — just with different parameters. This eliminates
duplication and ensures consistency. If you change the deploy process, you update
one template and all pipelines benefit.
Extends vs Include Templates
Two fundamentally different approaches
Include (template:)
Inserts steps/jobs/stages into your pipeline
Pipeline author has full control
Can add anything before or after
Great for reuse
Extends (extends:)
Your pipeline runs inside the template
Template author has control
Pipeline author fills in the gaps
Great for governance
This is the critical distinction. With include-style templates, the pipeline author is in charge —
they pull in reusable pieces wherever they want. With extends, the template author is in charge —
they define the overall structure and the pipeline author can only customize the parts the template
allows. Think of include as a library you call, and extends as a framework you plug into.
Use include for convenience and reuse. Use extends when you need to enforce standards that
pipeline authors cannot bypass.
Extends Template — How It Works
Template (owned by platform team)
# templates/secure-pipeline.yml
parameters:
- name: buildSteps
  type: stepList
  default: []

stages:
- stage: Build
  jobs:
  - job: Build
    steps:
    - task: CredScan@3
    - ${{ each step in parameters.buildSteps }}:
      - ${{ step }}
    - task: PublishSecurityAnalysis@0
Pipeline (owned by dev team)
# azure-pipelines.yml
trigger:
- main

extends:
  template: templates/secure-pipeline.yml
  parameters:
    buildSteps:
    - script: dotnet build
    - script: dotnet test
On the left is the template owned by the platform or security team. It defines CredScan
at the start and PublishSecurityAnalysis at the end — these always run and cannot be removed.
The template accepts a buildSteps parameter and iterates over it using the each expression.
On the right, the dev team's pipeline uses extends to plug into this template. They can only
provide their build steps — they cannot skip the security tasks. The template wraps around
the consumer's code like a sandwich.
Enforcing Extends with Checks
Use the Required Template check on environments
Pipeline must extend an approved template to deploy
Prevents teams from bypassing governance
Combine with branch control for full protection
Environment: Production
Check: Pipeline must extend templates/secure-pipeline.yml from MyProject/pipeline-templates
Extends templates become truly powerful when combined with the Required Template check on environments.
You configure the Production environment to require that any pipeline deploying to it must extend
a specific approved template. If a developer creates a pipeline that doesn't extend the template,
it simply cannot deploy to Production — Azure DevOps blocks it at the environment check stage.
This is the recommended governance pattern for enterprise organizations. Add branch control
to ensure only main branch can deploy, and you have a robust, tamper-proof CI/CD pipeline.
Cross-Repo Templates
Share templates across repositories
resources:
  repositories:
  - repository: templates
    type: git
    name: MyProject/pipeline-templates
    ref: refs/tags/v1.0

stages:
- template: stages/build.yml@templates
  parameters:
    solution: '**/*.sln'
- template: stages/deploy.yml@templates
  parameters:
    environment: 'Production'
You can reference templates from other repositories using the resources section.
This is ideal for organizations that maintain a central templates repository.
Use a ref tag to pin to a specific version for stability. The @templates suffix
tells Azure DevOps which repository to pull the template from. This works with
Azure Repos and GitHub repositories.
Service Connections & Artifacts
Service Connections
Secure links to external services
Azure, Docker, Kubernetes, npm, NuGet, etc.
Managed in Project Settings
Credentials stored securely — never in YAML
steps:
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'my-azure-connection'
    appName: 'my-web-app'
    package: '$(Build.ArtifactStagingDirectory)/**/*.zip'
Service connections are how pipelines authenticate to external systems without
exposing credentials in your YAML. You create them in Azure DevOps project settings.
The most common is an Azure Resource Manager service connection for deploying to Azure.
They use workload identity federation or service principals under the hood.
Pipeline Artifacts
Pass files between jobs and stages
# Publish artifacts in the Build stage
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'

# Download artifacts in the Deploy stage
- task: DownloadPipelineArtifact@2
  inputs:
    artifact: 'drop'
    path: '$(Pipeline.Workspace)/drop'
Since each job runs on a fresh agent, you use artifacts to pass build outputs
between jobs and stages. Publish your build output as a pipeline artifact in
the Build stage, then download it in the Deploy stage. Deployment jobs
automatically download artifacts from the triggering run.
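A minimal end-to-end sketch of the publish/download handoff between two jobs might look like this (the file name app.txt is illustrative):

```yaml
jobs:
  - job: Build
    pool: { vmImage: 'ubuntu-latest' }
    steps:
      # Produce a file in the staging directory, then publish it as an artifact
      - script: echo "build output" > $(Build.ArtifactStagingDirectory)/app.txt
      - task: PublishPipelineArtifact@1
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)'
          artifact: 'drop'
  - job: Verify
    dependsOn: Build
    pool: { vmImage: 'ubuntu-latest' }
    steps:
      # This job runs on a fresh agent: the file only exists after the download
      - task: DownloadPipelineArtifact@2
        inputs:
          artifact: 'drop'
          path: '$(Pipeline.Workspace)/drop'
      - script: cat $(Pipeline.Workspace)/drop/app.txt
```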
Putting It All Together
A complete CI/CD pipeline
trigger:
  branches:
    include: [ main ]

variables:
  buildConfiguration: 'Release'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        pool: { vmImage: 'ubuntu-latest' }
        steps:
          - task: UseDotNet@2
            inputs: { version: '8.x' }
          - script: dotnet build -c $(buildConfiguration)
          - script: dotnet test --no-build
          # Place publish output where the artifact task picks it up
          - script: dotnet publish -c $(buildConfiguration) --no-build -o $(Build.ArtifactStagingDirectory)
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)'
              artifact: 'drop'
  - stage: DeployStaging
    dependsOn: Build
    jobs:
      - deployment: Deploy
        environment: 'Staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'azure-conn'
                    appName: 'myapp-staging'
  - stage: DeployProd
    dependsOn: DeployStaging
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
    jobs:
      - deployment: Deploy
        environment: 'Production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'azure-conn'
                    appName: 'myapp-prod'
This is a realistic multi-stage pipeline bringing together everything we've covered:
CI triggers, variables, stages with dependencies, build tasks, artifacts, deployment jobs
with environments, conditions, and Azure service connections. The Production stage
only runs from the main branch and would have manual approval on the environment.
Best Practices
Use templates to avoid duplication
Pin task versions — Task@2 not Task
Store secrets in Variable Groups or Key Vault
Use path triggers to avoid unnecessary runs
Add meaningful displayName to all steps
These practices will save you time and headaches. Templates prevent duplication
and enforce standards. Pinning task versions prevents breaking changes when a task
is updated. Never put secrets in YAML — always use variable groups or Key Vault.
Path triggers are an easy performance win. Display names make logs readable
when debugging failed pipelines.
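The path-trigger and variable-group tips above might look like this in practice. The group name my-app-secrets and the paths are illustrative:

```yaml
trigger:
  branches:
    include: [ main ]
  paths:
    # Only run CI when source actually changes; skip docs-only commits
    include: [ src/** ]
    exclude: [ docs/**, '*.md' ]

variables:
  # Variable group defined under Pipelines > Library; can be linked to Key Vault
  - group: my-app-secrets
  - name: buildConfiguration
    value: 'Release'
```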
More Best Practices
Use extends templates for governance
Keep pipelines small — compose with templates
Use environments with approvals for production
Test pipeline changes in feature branches
Document non-obvious pipeline logic in comments
The extends template pattern is essential for enterprise governance.
Keep individual pipeline files small and readable by composing templates.
Always use environments with approval checks for production deployments.
Since pipelines are code, test changes in feature branches before merging to main.
Add YAML comments to explain complex conditions or expressions.
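As a small sketch of the last two points, a commented condition with a displayName might look like this (the script name and the condition's intent are illustrative):

```yaml
steps:
  # Skip smoke tests on PR builds to keep validation fast;
  # full tests still run on merges to main
  - script: ./run-smoke-tests.sh
    displayName: 'Run smoke tests'
    condition: ne(variables['Build.Reason'], 'PullRequest')
```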
Key Takeaways
📐
Pipeline → Stages → Jobs → Steps
⚡
Triggers control when pipelines run
🔧
Tasks & scripts do the work
📦
Variables store config & secrets
✅
Environments & approvals gate deploys
To wrap up: pipelines are structured as stages, jobs, and steps. Triggers automate when
pipelines run. Tasks and scripts do the actual work. Variables and variable groups manage
configuration. Templates are the key to reusability and governance. And environments with
approvals provide the safety gates around production deployments.
Resources
Here are the key documentation links. The YAML schema reference is particularly useful —
it documents every keyword and property available. Bookmark the task reference for
looking up task inputs. Microsoft keeps these docs up to date with new features.
Hands-On Lab
Build a CI/CD pipeline with Azure DevOps & Bicep
Objectives
Create and run an Azure DevOps YAML pipeline
Connect a pipeline to Azure using a service connection
Validate a Bicep file as a CI step
Deploy infrastructure to Azure (CD)
Configure environments and approval gates
Prerequisites
Access to an Azure DevOps organization & project
The infra/main.bicep file uploaded to your repo
A service connection to your Azure subscription
Time for hands-on practice. Walk through the objectives — by the end participants will
have a full CI/CD pipeline with validation, deployment, and an approval-gated production
environment. Make sure everyone has the prerequisites sorted before starting: DevOps access,
the Bicep file in their repo, and a service connection configured.
Share the lab link so participants can follow along at their own pace.
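For the Bicep validation objective, a CI step along these lines would work, assuming the infra/main.bicep path from the prerequisites. The connection name azure-conn and resource group my-rg are hypothetical placeholders:

```yaml
steps:
  # Compile the Bicep file; fails the pipeline on syntax or semantic errors
  - script: az bicep build --file infra/main.bicep
    displayName: 'Validate Bicep'
  # Preview the deployment against Azure (requires the service connection)
  - task: AzureCLI@2
    displayName: 'What-if deployment'
    inputs:
      azureSubscription: 'azure-conn'   # hypothetical connection name
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az deployment group what-if \
          --resource-group my-rg \
          --template-file infra/main.bicep
```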
Questions?
Thank you for attending
Open the floor for questions. Remind attendees about the documentation links
and encourage them to practice by creating a pipeline in a test project.